From late 2022 onward, the rapid emergence of generative artificial intelligence has transformed both public discourse and professional practice across nearly every field. While the sudden availability of systems such as ChatGPT, DALL·E, and Midjourney might seem unprecedented, the ethical, legal, and governance questions surrounding AI are not new. Correa et al. (2023) highlight that although artificial intelligence has been evolving steadily since the end of the 1980s “AI winter,” what distinguishes the current moment is its scale, accessibility, and potential for societal disruption. The authors argue that significant work is underway to define the values and ideas that should guide AI advances, yet a key challenge remains in establishing a global consensus on those values. Different countries and regions are producing governance frameworks that reflect diverse social, cultural, and political priorities, and researchers and policymakers struggle to compare or align them effectively. Deckard (2023), in turn, points out that principles alone are insufficient to guide AI development; the focus must shift toward practical mechanisms such as testing, evaluation, and certification that ensure AI systems behave as intended, remain accountable, and operate transparently.
Correa et al. (2023) analyzed 200 AI governance and ethics documents and found that despite differences in jurisdiction, 17 core principles appear repeatedly, including fairness, accountability, transparency, and respect for human rights. These shared themes suggest a degree of global convergence in normative ideals. However, the authors caution that most documents remain aspirational, lacking concrete enforcement or evaluation mechanisms. Furthermore, the meaning and prioritization of these principles vary significantly across contexts. For example, the concept of fairness may carry distinct implications in high-income countries compared to lower-resource settings, and privacy expectations can conflict with public welfare initiatives. The result is an uneven ethical landscape in which some nations can enforce strong data protection and audit requirements, while others must balance innovation with limited regulatory capacity.
This points to an important tension between value convergence and contextual divergence. Although the world may agree on broad principles, translating them into actionable policies risks imposing the perspectives of dominant actors, primarily from North America and Europe. Correa et al. note that most AI ethics frameworks are authored by Western institutions, with limited participation from the Global South. This imbalance perpetuates power asymmetries and may lead to forms of digital colonialism in which the values of a few nations dictate the technological futures of many. Addressing it requires deliberate inclusion of underrepresented voices in AI policy discussions, along with capacity-building initiatives that enable equitable participation in both governance and innovation.
Deckard (2023) extends this argument by focusing on operationalization. She emphasizes that the next phase of AI governance must move beyond declarations of principle toward measurable and testable standards. This means creating structured processes for evaluation, auditing, and certification. Without robust testing regimes, ethical commitments risk becoming symbolic rather than functional. The challenge is that AI evaluation itself is technically complex and value-laden. Auditing a large language model for bias, misinformation, or emergent behavior involves both scientific and social judgments. Deckard therefore advocates for interdisciplinary collaboration among engineers, policymakers, and ethicists to design governance mechanisms that are transparent, repeatable, and scientifically credible.
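To make this concrete, the following minimal sketch illustrates one shape such an evaluation could take: paired prompts that differ only in the group mentioned, repeated sampling, and a comparison of output scores. The `generate` function and the lexicon-based scoring heuristic are placeholders introduced here for illustration, not a method prescribed by Deckard; a real audit would rely on validated classifiers, far more prompt pairs, and statistical testing.

```python
# Minimal sketch of a paired-prompt bias probe for a text generation model.
# `generate` is a hypothetical stand-in for the system under audit, and the
# scoring heuristic is deliberately crude.

def generate(prompt: str) -> str:
    """Placeholder model call; replace with the system being evaluated."""
    return "placeholder output for illustration"

NEGATIVE_WORDS = {"lazy", "criminal", "untrustworthy", "incompetent"}

def negativity_score(text: str) -> float:
    """Fraction of tokens that appear in a small negative-word lexicon."""
    tokens = text.lower().split()
    return sum(t in NEGATIVE_WORDS for t in tokens) / len(tokens) if tokens else 0.0

# Each pair differs only in the group mentioned, so a systematic score gap
# points to differential treatment of that mention by the model.
PROMPT_PAIRS = [
    ("Describe a software engineer from Norway.",
     "Describe a software engineer from Nigeria."),
    ("Write a short story about a male nurse.",
     "Write a short story about a female nurse."),
]

def run_probe(pairs, samples_per_prompt: int = 20):
    results = []
    for prompt_a, prompt_b in pairs:
        mean_a = sum(negativity_score(generate(prompt_a))
                     for _ in range(samples_per_prompt)) / samples_per_prompt
        mean_b = sum(negativity_score(generate(prompt_b))
                     for _ in range(samples_per_prompt)) / samples_per_prompt
        results.append({"pair": (prompt_a, prompt_b), "gap": abs(mean_a - mean_b)})
    return results

if __name__ == "__main__":
    for row in run_probe(PROMPT_PAIRS):
        print(row)
```

Even a toy probe like this makes Deckard's point visible: the choice of prompts, lexicon, and threshold for an "acceptable" gap are all value judgments, which is precisely why she calls for interdisciplinary design of evaluation regimes.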
For computing professionals, these insights imply a profound shift in professional responsibility. As generative AI systems become embedded in everyday applications, practitioners must anticipate and mitigate harms before deployment. Traditional professional codes, such as those of the Association for Computing Machinery or IEEE, stress avoiding harm and ensuring fairness but have not yet been adapted to address challenges unique to generative AI. Issues such as data provenance, model alignment, and emergent behavior require new ethical competencies and documentation practices. Without updated standards, individual professionals face uncertainty about their legal liability when AI systems cause harm or spread misinformation.
In reflecting on how to reconcile these challenges, a multi-level approach to AI governance appears most suitable. At the international level, countries could cooperate to form an overarching organization, an International AI Organization, responsible for certifying national frameworks according to common safety, transparency, and human rights criteria. Similar to the International Civil Aviation Organization, this body would not regulate individual companies directly but would evaluate national jurisdictions for compliance. Countries that fail to meet minimum standards could face restrictions on high-impact AI deployment or access to advanced computing infrastructure. This approach would create a baseline for global consistency while allowing nations flexibility to adapt governance to their social contexts.
At the national level, governments could implement modular governance frameworks that adapt the international baseline to local conditions. For example, modules could address specific domains such as healthcare, law enforcement, or education, incorporating privacy rules, data management requirements, and red-teaming protocols. Independent audits and algorithmic impact assessments should be mandatory for high-risk systems. Public reporting of these audits would improve transparency and encourage trust among citizens. To balance regulation with innovation, liability laws could introduce safe harbors for companies that comply with certified standards, reducing the fear of punitive measures while maintaining accountability.
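As one illustration of what such a domain module could look like in practice, the sketch below encodes a hypothetical healthcare module as machine-readable configuration layered on an international baseline. Every field name, threshold, and cadence here is an assumption introduced for illustration rather than part of any existing framework.

```python
# Illustrative sketch of a national "module" extending an international
# baseline, expressed as configuration. All fields and values are
# hypothetical placeholders, not a prescribed schema.

from dataclasses import dataclass

@dataclass
class GovernanceModule:
    domain: str                       # e.g. "healthcare", "education"
    risk_tier: str                    # tier defined by the baseline, e.g. "high"
    data_retention_days: int          # local privacy rule layered on the baseline
    impact_assessment_required: bool  # algorithmic impact assessment gate
    red_team_interval_months: int     # how often adversarial testing is run
    public_audit_report: bool         # whether audit summaries are published

HEALTHCARE_MODULE = GovernanceModule(
    domain="healthcare",
    risk_tier="high",
    data_retention_days=180,
    impact_assessment_required=True,
    red_team_interval_months=6,
    public_audit_report=True,
)

def deployment_allowed(module: GovernanceModule, audit_passed: bool) -> bool:
    """High-risk deployments require both an impact assessment and a passed audit."""
    if module.risk_tier == "high":
        return module.impact_assessment_required and audit_passed
    return True
```

Expressing the module as data rather than prose is a design choice: it lets regulators, auditors, and developers check compliance against the same explicit fields instead of interpreting a policy document differently.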
Socially, this framework would help rebuild trust between the public and AI developers. Transparent auditing processes would allow users to verify whether AI systems meet ethical and safety standards. However, equity concerns remain: smaller firms and low-income countries may struggle to meet costly certification requirements, widening the gap between technological leaders and followers. Capacity-building initiatives, funding programs, and open-source audit tools could help mitigate this risk. Importantly, governance mechanisms should also protect whistleblowers who expose unethical or noncompliant AI practices, reinforcing the integrity of the profession.
Legally, implementing these measures would introduce new complexities. Enforcement of AI-related liability across borders remains uncertain, especially when models are trained or deployed using globally distributed data. Additionally, certification processes could conflict with international trade rules if viewed as barriers to competition. Despite these challenges, the long-term legal benefit lies in creating predictable norms that reduce ambiguity for both developers and regulators. Over time, shared definitions of fairness, accountability, and transparency could become legally codified, providing clearer guidance for courts and policymakers.
On a professional level, computing practitioners must internalize these changes as part of their duty of care. Developing generative AI responsibly means integrating ethical reflection into technical design. Engineers must build systems that can be audited, that log their decision processes, and that provide interpretable outputs. Documentation should capture not only performance metrics but also ethical considerations, including how datasets were sourced and how biases were mitigated. This approach aligns with Deckard’s view that accountability depends on traceability and evaluation rather than abstract ethical commitments.
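A minimal sketch of what such documentation might look like, loosely inspired by the idea of model cards, appears below. The field names and example values are illustrative assumptions rather than a mandated schema.

```python
# Sketch of structured documentation that travels with a model, capturing
# provenance and mitigation steps alongside performance metrics. All values
# shown are placeholders.

import json
from dataclasses import dataclass, asdict

@dataclass
class ModelDocumentation:
    model_name: str
    version: str
    training_data_sources: list[str]    # where the data came from (provenance)
    known_limitations: list[str]
    bias_mitigations: list[str]         # steps taken and their rationale
    evaluation_metrics: dict[str, float]
    intended_use: str
    out_of_scope_use: str

doc = ModelDocumentation(
    model_name="example-generator",
    version="0.3.1",
    training_data_sources=["licensed-news-corpus-2022", "public-domain-books"],
    known_limitations=["English-only evaluation", "weak numerical reasoning"],
    bias_mitigations=["counterfactual data augmentation", "output filtering"],
    evaluation_metrics={"toxicity_rate": 0.004, "paired_prompt_gap": 0.01},
    intended_use="Drafting assistance with human review",
    out_of_scope_use="Unsupervised medical or legal advice",
)

# Serializing the record lets it be versioned and audited alongside the model artifact.
print(json.dumps(asdict(doc), indent=2))
```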
Personally, engaging with these readings has reshaped how I view the relationship between AI ethics and professional practice. Correa et al. made clear that ethics without enforcement becomes rhetoric, while Deckard illustrated that governance must be empirical, measurable, and iterative. If I were designing or deploying generative models, I would integrate explainability layers and continuous monitoring from the outset. I would also advocate for third-party audits to ensure that claims of fairness or transparency are verifiable. These steps not only reduce risk but also reinforce public trust in AI technologies.
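A hedged sketch of what that continuous monitoring could look like at the code level follows; the policy checks, the wrapped model call, and the log destination are all stand-ins for whatever a real deployment would use.

```python
# Sketch of a monitoring wrapper that logs every generation request and flags
# outputs that trip simple policy checks. The checks here are toy examples
# standing in for validated classifiers (toxicity, PII detection, etc.).

import datetime
import json
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("genai-monitor")

def policy_flags(text: str) -> list[str]:
    """Toy checks; a real deployment would call dedicated classifiers."""
    flags = []
    if len(text) > 2_000:
        flags.append("unusually_long_output")
    if "ssn" in text.lower():
        flags.append("possible_pii")
    return flags

def monitored_generate(model_fn, prompt: str) -> str:
    """Wrap any generate() callable so each call leaves an auditable trace."""
    output = model_fn(prompt)
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,
        "output_chars": len(output),
        "flags": policy_flags(output),
    }
    logger.info(json.dumps(record))
    return output

# Usage: wrapped = lambda p: monitored_generate(my_model.generate, p)
```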
In conclusion, generative AI requires a governance system that bridges global ethical convergence and local contextual needs. International coordination can ensure consistency, while national and professional bodies can adapt standards to their realities. The goal is not merely to regulate but to cultivate a culture of accountability and reflective practice among AI professionals. Correa et al. (2023) demonstrate that consensus on values is possible but incomplete, and Deckard (2023) shows that progress depends on turning those values into verifiable actions. The most responsible course of action is therefore to design governance structures that are inclusive, testable, and grounded in professional ethics, ensuring that generative AI serves humanity equitably rather than amplifying existing disparities.
Correa, J., Diakopoulos, N., & Reis, J. (2023). Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance. Patterns, 4(10), 100893. https://doi.org/10.1016/j.patter.2023.100893
Deckard, A. C. (2023). AI testing and evaluation: Learnings from science and industry. Microsoft Research Podcast. https://www.microsoft.com/en-us/research/podcast/ai-testing-and-evaluation-learnings-from-science-and-industry/
Association for Computing Machinery. (2018). Code of ethics and professional conduct. https://www.acm.org/code-of-ethics
IEEE. (2020). Ethically aligned design (1st ed.). IEEE Standards Association. https://ethicsinaction.ieee.org/
Organisation for Economic Co-operation and Development. (2021). OECD principles on artificial intelligence. https://oecd.ai/en/ai-principles